Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI
Islam, Mst Rafia, Wasi, Azmine Toushik
AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address various concerns specific to that phase, ranging from bias and regulatory issues to violations of International Humanitarian Law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.
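A phase-by-phase evaluation framework like the one the abstract describes can be represented as a simple checklist structure. The sketch below is purely illustrative: the phase names follow the abstract, but the per-phase concern lists and the `evaluate` helper are hypothetical stand-ins, not the paper's actual taxonomy.

```python
# Hypothetical sketch of a three-stage human-rights evaluation checklist.
# Phase names come from the abstract; the concern lists are invented examples.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    concerns: list = field(default_factory=list)

framework = [
    Phase("Design", ["bias in training data", "regulatory compliance"]),
    Phase("In Deployment", ["meaningful human control", "accountability"]),
    Phase("During/After Use", ["IHL violations", "right-to-life impact"]),
]

def evaluate(system_findings: dict) -> list:
    """Return, per phase, the concerns flagged for a reviewed system."""
    flagged = []
    for phase in framework:
        hits = [c for c in phase.concerns if system_findings.get(c)]
        if hits:
            flagged.append((phase.name, hits))
    return flagged

print(evaluate({"bias in training data": True, "IHL violations": True}))
# → [('Design', ['bias in training data']), ('During/After Use', ['IHL violations'])]
```

Grouping concerns by lifecycle phase makes it easy to report which stage of a system's life a given finding belongs to, mirroring the framework's structure.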
- Asia > Bangladesh (0.05)
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
FairHome: A Fair Housing and Fair Lending Dataset
Bagalkotkar, Anusha, Karmakar, Aveek, Arnson, Gabriel, Linda, Ondrej
We present a Fair Housing and Fair Lending dataset (FairHome): a dataset with around 75,000 examples across 9 protected categories. To the best of our knowledge, FairHome is the first publicly available dataset labeled with binary labels for compliance risk in the housing domain. We demonstrate the usefulness and effectiveness of such a dataset by training a classifier and using it to detect potential violations when using a large language model (LLM) in the context of real-estate transactions. We benchmark the trained classifier against state-of-the-art LLMs including GPT-3.5, GPT-4, LLaMA-3, and Mistral Large in both zero-shot and few-shot contexts. Our classifier outperformed them with an F1-score of 0.91, underscoring the effectiveness of our dataset. WARNING: Some of the examples included in the paper are not polite, insofar as they reveal bias that might feel discriminatory to readers.
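The workflow the abstract describes, training a binary compliance-risk text classifier and scoring it with F1, can be sketched in a few lines. Everything below is a toy stand-in: the example texts and labels are invented here (not FairHome data), and TF-IDF plus logistic regression is just one plausible classifier choice, not necessarily the paper's.

```python
# Minimal sketch: train a binary compliance-risk text classifier and
# compute its F1-score. All texts/labels here are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Label 1 = potential fair-housing compliance risk, 0 = benign listing text.
train_texts = [
    "No families with children, adults only building",
    "Spacious two-bedroom apartment near transit",
    "Prefer tenants of a certain background",
    "Newly renovated kitchen and hardwood floors",
    "Not suitable for wheelchair users",
    "Pet-friendly unit with in-unit laundry",
]
train_labels = [1, 0, 1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

test_texts = [
    "Adults only, no children allowed",
    "Bright studio close to downtown",
]
test_labels = [1, 0]
preds = clf.predict(test_texts)
print("F1:", f1_score(test_labels, preds))
```

At the paper's scale (~75k labeled examples), the same fit/predict/F1 loop applies; the dataset size, not the pipeline shape, is what changes.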
- North America > United States > Washington > King County > Seattle (0.14)
- North America > United States > Washington > King County > Kirkland (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- (2 more...)
- Law (1.00)
- Banking & Finance > Real Estate (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
Global Big Data Conference
As the world becomes increasingly dependent on technology to communicate, attend school, do our work, buy groceries and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side and regulators have responded accordingly. In 2021, across the world, governing bodies have been working to regulate how AI and ML systems are used. From the UK to the EU to China, regulations on how industries should monitor their algorithms, best practices for auditing and frameworks for more transparent AI systems are on the rise.
- Asia > China (0.26)
- North America > United States > New York (0.08)
- North America > United States > Colorado (0.06)
- (2 more...)
- Government (1.00)
- Law > Statutes (0.95)
Why 2022 is only the beginning for AI regulation
As the world becomes increasingly dependent on technology to communicate, attend school, do our work, buy groceries and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side and regulators have responded accordingly. In 2021, across the world, governing bodies have been working to regulate how AI and ML systems are used.
- North America > United States > New York (0.05)
- North America > United States > Colorado (0.05)
- North America > Canada (0.05)
- (4 more...)
- Law > Statutes (1.00)
- Information Technology (1.00)
- Health & Medicine (1.00)
- (2 more...)
How to prioritise humans in artificial intelligence design for business
Through the pervasive use of massive amounts of data to automate decisions and processes, artificial intelligence (AI) constitutes one of the most impactful developments for businesses and organisations in general. However, this fast-paced and unstoppable trend raises ethical issues. How can we ensure that AI development is fair, when the algorithms at its core are designed with (often unconscious) racist, sexist, or other biases? Lorena Blasco-Arcas and Hsin-Hsuan Meg Lee propose a human-centred view for the design of specific frameworks and regulatory systems.
3 steps businesses can take to reduce bias in AI systems
"Okay, Google, what's the weather today?" "Sorry, I don't understand." Does the experience--interacting with smart machines that don't respond to commands--sound familiar? Such failures can leave people feeling dumbfounded, as if their intelligence were not on the same wavelength as the machines'. While AI developers do not intend their systems to interact selectively, such incidents are likely more frequent for "minorities" in the tech world. The global artificial intelligence (AI) software market is forecast to boom in the coming years, reaching around 126 billion US dollars by 2025.
- South America (0.05)
- North America > Central America (0.05)
- North America > Canada > Ontario (0.05)
- (3 more...)
- Information Technology (0.71)
- Law > Civil Rights & Constitutional Law (0.52)
r/MachineLearning - [D] Bias and Fairness in the ML community
In the past years we've had a growing discussion about how to reduce social bias and increase fairness, which is good. We wouldn't want our models to be discriminatory because of discriminatory historical data. However what I've noticed is that some have taken this as an opportunity to poison the well by suggesting active discriminatory practices in seductive language and insidious framing. "Debiasing" AI is not enough. We need to proactively use computational decision making to correct for injustice.
- Media > News (0.40)
- Law > Civil Rights & Constitutional Law (0.39)
The tangled relationship between AI and human rights
It was a pleasant 21 degrees in New York when computers defeated humanity -- or so many people thought. That Sunday in May 1997, Garry Kasparov, a prodigious chess grandmaster and world champion, was beaten by Deep Blue, a rather unassuming black rectangular computer developed by IBM. In the popular imagination, it seemed like humanity had crossed a threshold -- a machine had defeated one of the most intelligent people on the planet at one of the most intellectually challenging games we know. The age of AI was upon us. While Deep Blue was certainly an impressive piece of technology, it was no more than a supercharged calculating machine.
- Leisure & Entertainment > Games > Chess (1.00)
- Law (1.00)
- Information Technology (1.00)